Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis

👤 Chaowei Fang
📅 Last updated on May 22, 2024
CVPR
Figure: Incremental Cross-view Framework

Problem Statement

Considering that ground-truth intermediate slices are typically unavailable in clinical practice, we introduce an incremental cross-view mutual distillation strategy to accomplish this task in a self-supervised manner.

Proposed Method

Specifically, we model this problem from three different views (a brief code sketch of the three views follows this list):

1. Slice-wise interpolation from the axial view
2. Pixel-wise interpolation from the coronal view
3. Pixel-wise interpolation from the sagittal view
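
To make the three-view formulation concrete, the sketch below extracts the three complementary views from a single CT volume: a missing axial slice is a whole 2D image from the axial view, but only a single pixel row within each coronal or sagittal plane. The array names and sizes (`volume`, `D`, `H`, `W`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

# Hypothetical CT volume: D axial slices of size H x W (illustrative values).
D, H, W = 16, 64, 64
volume = np.random.rand(D, H, W).astype(np.float32)

# Axial view: synthesize a whole missing slice between slices d and d+1.
d = 5
axial_pair = volume[d], volume[d + 1]   # two (H, W) slices

# Coronal view: for a fixed coronal index h, the plane is a (D, W) image;
# the missing axial slice appears as a single missing pixel row in it.
h = 20
coronal_plane = volume[:, h, :]         # shape (D, W)

# Sagittal view: for a fixed sagittal index w, the plane is a (D, H) image;
# the missing slice again corresponds to one pixel row.
w = 30
sagittal_plane = volume[:, :, w]        # shape (D, H)

print(axial_pair[0].shape, coronal_plane.shape, sagittal_plane.shape)
```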

Under this formulation, the models learned from the different views can distill valuable knowledge to guide one another's learning processes. Repeating this process allows the models to synthesize intermediate slices at progressively higher inter-slice resolution.
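
A minimal sketch of how mutual distillation between the views could be expressed as a training loss, assuming each view's network predicts the same missing axial slice. The mean-of-the-other-views teacher and the L1 distance are simplifying assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_loss(pred_axial, pred_coronal, pred_sagittal):
    """Pull each view's prediction toward the detached average of the other
    two views' predictions, so the views act as teachers for one another."""
    preds = [pred_axial, pred_coronal, pred_sagittal]
    loss = 0.0
    for i, p in enumerate(preds):
        others = [q.detach() for j, q in enumerate(preds) if j != i]
        teacher = torch.stack(others).mean(dim=0)  # pseudo-label from other views
        loss = loss + F.l1_loss(p, teacher)
    return loss / len(preds)

# Toy usage: three predictions of the same intermediate (H, W) slice.
H, W = 64, 64
p_ax, p_co, p_sa = (torch.rand(1, 1, H, W, requires_grad=True) for _ in range(3))
print(mutual_distillation_loss(p_ax, p_co, p_sa).item())
```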

Key Advantages

Self-supervised Learning: The proposed method does not require ground-truth intermediate slices, making it highly practical for clinical applications where such data is typically unavailable.

Cross-view Knowledge Distillation: By leveraging multiple viewing perspectives (axial, coronal, and sagittal), the method achieves more robust and accurate slice synthesis through mutual knowledge transfer.

Incremental Resolution Enhancement: The iterative refinement process allows for progressively higher inter-slice resolution, improving the quality of 3D medical imaging reconstructions.
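
To illustrate the incremental aspect, the sketch below repeatedly inserts one synthesized slice between every adjacent pair of axial slices, halving the effective inter-slice spacing each round. The `synthesize_between` function is a hypothetical placeholder (plain averaging) standing in for the learned cross-view synthesis model.

```python
import numpy as np

def synthesize_between(slice_a, slice_b):
    """Placeholder for the learned cross-view synthesis model;
    linear averaging keeps the loop runnable for illustration."""
    return 0.5 * (slice_a + slice_b)

def refine_once(volume):
    """Insert one synthesized slice between each adjacent pair of axial
    slices, halving the effective inter-slice spacing."""
    new_slices = []
    for a, b in zip(volume[:-1], volume[1:]):
        new_slices.extend([a, synthesize_between(a, b)])
    new_slices.append(volume[-1])
    return np.stack(new_slices)

vol = np.random.rand(8, 64, 64).astype(np.float32)
for _ in range(2):           # two rounds: 8 -> 15 -> 29 slices
    vol = refine_once(vol)
print(vol.shape)             # (29, 64, 64)
```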